14 research outputs found

    SafePASS - Transforming marine accident response

    The evacuation of a ship is the last line of defence against human losses in extreme fire and flooding emergencies. Since the establishment of the International Maritime Organisation (IMO), maritime safety has been its cornerstone, with the Safety of Life at Sea Convention (SOLAS) spearheading its relentless efforts to reduce risks to human life at sea. However, the times are changing. On the one hand, we have the new opportunities created by the vast technological advances of today. On the other, we are facing new challenges, with the ever-increasing size of passenger ships and the societal pressure for continuous improvement of maritime safety. In this respect, the EU-funded Horizon 2020 Research and Innovation Programme project SafePASS, presented herein, aims to radically redefine the evacuation processes, the involved systems and equipment, and to challenge the international regulations for large passenger ships, in all environments, hazards and weather conditions, independently of demographic factors. The project consortium brings together 15 European partners from industry, academia and classification societies. The SafePASS vision and plan for a safer, faster and smarter ship evacuation involves: i) a holistic and seamless approach to evacuation, addressing all stages from alarm to rescue, including the design of the next generation of life-saving appliances; and ii) the integration of ‘smart’ technology and Augmented Reality (AR) applications to provide individual guidance to passengers, regardless of their demographic characteristics or the hazard (flooding or fire), towards the optimal escape route.

    SafePASS: a new chapter for passenger ship evacuation and marine emergency response

    Despite the current high level of safety and the efforts to make passenger ships resilient to most fire and flooding scenarios, there are still gaps and challenges in the marine emergency response and ship evacuation processes. These challenges arise from the fact that both processes are complex, multi-variable problems that depend on parameters involving not only people and technology but also procedural and managerial issues. The SafePASS project, funded under the EU's Horizon 2020 Research and Innovation Programme, is set to radically redefine the evacuation process by introducing new equipment, expanding the capabilities of legacy systems on board, proposing new Life-Saving Appliances and ship layouts, and challenging the current international regulations, thereby reducing uncertainty and increasing efficiency in all stages of the ship evacuation and abandonment process.

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    The next wave of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection and autonomous functions, accelerating progress towards IoT systems that can self-develop, self-maintain and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT, looking not only at the technology enabling the IoT but also at the platforms and smart data aspects that will bring intelligence, sustainability, dependability and autonomy, and will support human-centric solutions.

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low-middle-income countries. Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable to more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. For low-middle-income countries, the top three shortlisted interventions were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low-middle-income countries.
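
    As a rough, purely illustrative sketch of the phase-3 co-prioritization step (the 90 per cent acceptability cut-off followed by ranking), the Python snippet below filters and orders a toy shortlist; the intervention names, scores, and the feasibility-plus-safety ranking rule are invented for illustration and are not the study's actual data or scoring method.

```python
# Toy sketch of a co-prioritization step: keep interventions that at least
# 90 per cent of respondents find acceptable, then rank by a combined
# feasibility/safety score. All values below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    acceptability: float  # fraction of respondents rating it acceptable
    feasibility: float    # 0..1 clinician-rated feasibility (hypothetical)
    safety: float         # 0..1 clinician-rated safety (hypothetical)

def co_prioritize(items: list[Intervention], threshold: float = 0.90) -> list[str]:
    """Filter by acceptability threshold, then rank by feasibility + safety."""
    kept = [i for i in items if i.acceptability >= threshold]
    kept.sort(key=lambda i: i.feasibility + i.safety, reverse=True)
    return [i.name for i in kept]

shortlist = [
    Intervention("introduce recycling", 0.97, 0.90, 0.95),
    Intervention("reduce general anaesthesia", 0.84, 0.60, 0.70),  # filtered out
    Intervention("reduce anaesthetic gases", 0.93, 0.80, 0.90),
]
print(co_prioritize(shortlist))
```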

    Design and Implementation of a UAV-Based Airborne Computing Platform for Computer Vision and Machine Learning Applications

    Visual sensing of the environment is crucial for flying an unmanned aerial vehicle (UAV) and is a centerpiece of many related applications. The ability to run computer vision and machine learning algorithms onboard an unmanned aerial system (UAS) is becoming a necessity, in order to alleviate the communication burden of high-resolution video streaming, to provide flying aids such as obstacle avoidance and automated landing, and to create autonomous machines. There is therefore growing interest among researchers in developing and validating solutions suitable for deployment on a UAV system, following the general trend of edge processing and airborne computing, which transforms UAVs from moving sensors into intelligent nodes capable of local processing. In this paper, we present, in a rigorous way, the design and implementation of a 12.85 kg UAV system equipped with the computational power and sensors needed to serve as a testbed for image processing and machine learning applications, explain the rationale behind our decisions, highlight selected implementation details, and showcase the usefulness of our system by providing an example of how a sample computer vision application can be deployed on our platform.
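
    As a minimal, hypothetical illustration of the kind of onboard vision workload such a testbed is meant to host, the sketch below grabs frames from a camera and runs OpenCV's stock HOG person detector while reporting throughput; the camera index, input resolution, and choice of detector are assumptions and do not reflect the paper's actual software stack.

```python
# Minimal sketch of an onboard vision loop (illustrative only): capture frames
# from a camera and run OpenCV's default HOG person detector, printing FPS.
import time
import cv2

def onboard_person_detection(camera_index: int = 0, max_frames: int = 300) -> None:
    """Grab frames from an onboard camera and run a lightweight person detector."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(camera_index)
    frames, start = 0, time.time()
    while cap.isOpened() and frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale to keep inference within an embedded power/compute budget.
        small = cv2.resize(frame, (640, 360))
        boxes, _ = hog.detectMultiScale(small, winStride=(8, 8))
        frames += 1
        elapsed = max(time.time() - start, 1e-6)
        print(f"frame {frames}: {len(boxes)} person(s), {frames / elapsed:.1f} FPS")
    cap.release()

if __name__ == "__main__":
    onboard_person_detection()
```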

    A Comparative Study of Autonomous Object Detection Algorithms in the Maritime Environment Using a UAV Platform

    Maritime operations rely heavily on surveillance and require reliable and timely data to inform decisions and planning. Critical information in such cases includes the exact location of objects in the water, such as vessels, persons, and others. Due to the unique characteristics of the maritime environment, the location of even inert objects changes over time, depending on the weather conditions, water currents, and other factors. Unmanned aerial vehicles (UAVs) can support maritime operations by providing live video streams and images from the area of operations. Machine learning algorithms can be developed, trained, and used to automatically detect and track objects of specific types and characteristics. EFFECTOR is an EU-funded project developing an Interoperability Framework for maritime surveillance. Within the project, we developed an embedded system that employs machine learning algorithms, allowing a UAV to autonomously detect objects in the water and keep track of their changing position over time. Using the onboard computation unit of the UAV, we ran, and present the results of, a series of comparative tests of candidate architecture sizes and training datasets for the detection and tracking of objects in the maritime environment. We evaluated the architectures in terms of their efficiency, accuracy, and speed. A combined solution for training on the datasets is suggested, providing optimal efficiency and accuracy.
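
    The comparison of architecture sizes and training datasets described above amounts to measuring accuracy and speed for each candidate on labelled frames. The harness below sketches that idea in Python with a simple IoU-based recall and mean latency; the detector callables, test samples, and the 0.5 IoU threshold are placeholders, not the project's actual models, data, or metrics.

```python
# Illustrative benchmarking harness: measure recall (IoU-matched) and mean
# per-frame latency for any detector callable over labelled frames.
import time
from typing import Callable, Iterable, List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def benchmark(detector: Callable[[object], List[Box]],
              samples: Iterable[Tuple[object, List[Box]]],
              iou_thr: float = 0.5) -> Tuple[float, float]:
    """Return (recall, mean latency in ms) of a detector over labelled frames."""
    hits = total = 0
    latencies: List[float] = []
    for frame, ground_truth in samples:
        t0 = time.perf_counter()
        predictions = detector(frame)
        latencies.append((time.perf_counter() - t0) * 1000.0)
        for gt in ground_truth:
            total += 1
            hits += any(iou(gt, p) >= iou_thr for p in predictions)
    recall = hits / total if total else 0.0
    return recall, sum(latencies) / max(len(latencies), 1)
```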

    The Implementation of a Smart Lifejacket for Assisting Passengers in the Evacuation of Large Passenger Ships

    The evacuation and abandonment of large passenger ships, involving thousands of passengers, is a safety-critical task where techniques and systems that can improve the complex decision-making process and the timely response to emergencies on board are of vital importance. Current evacuation systems and processes are based on predefined and static exit signs, and on information provided to the passengers in the form of evacuation drills, emergency information leaflets and public announcement systems. It is mandatory for passengers to wear lifejackets during an evacuation; these are made of buoyant or inflatable material to keep them afloat in the water. Time is the most critical attribute in ship evacuation and can significantly affect the overall evacuation process if passengers do not reach their embarkation stations in a timely manner. Moreover, extreme conditions and hazards, such as fire or flooding, can hinder timely evacuation. To improve the current evacuation systems onboard large passenger ships, a smart lifejacket has been designed and implemented within the context of the SafePASS project. The proposed smart lifejacket integrates indoor localization and navigation functionality to assist passengers during the evacuation process. Once the passenger's location within the ship is calculated, the navigation feature guides the passenger along an escape route using vibration motors attached to the lifejacket. These haptic cues help passengers reach their destination, especially in low-visibility conditions and in case they are left behind or lost. This can increase passenger safety and reduce the total evacuation time, as well as support dynamic evacuation scenarios where the predefined, static escape routes may not be available due to fire or flooding incidents.
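
    To make the haptic-guidance idea concrete, the following sketch picks which of four hypothetical vibration motors (front/right/back/left on the lifejacket) to pulse, given an estimated indoor position, the wearer's heading, and the next waypoint on the escape route; the coordinate convention and four-motor layout are illustrative assumptions, not the SafePASS hardware design.

```python
# Hypothetical haptic-guidance sketch: select the vibration motor pointing
# toward the next waypoint, relative to the wearer's heading.
import math

MOTORS = ["front", "right", "back", "left"]  # clockwise from the wearer's front

def motor_for_waypoint(position: tuple[float, float],
                       heading_deg: float,
                       waypoint: tuple[float, float]) -> str:
    """Return the motor pointing toward the waypoint, relative to the wearer."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # 0 deg = ship "north"
    relative = (bearing - heading_deg) % 360.0           # angle w.r.t. heading
    sector = int(((relative + 45.0) % 360.0) // 90.0)    # four 90-degree sectors
    return MOTORS[sector]

# Example: passenger at (10, 4) facing 90 deg (east), next waypoint due north.
print(motor_for_waypoint((10.0, 4.0), 90.0, (10.0, 20.0)))  # -> "left"
```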

    A Video Analytics System for Person Detection Combined with Edge Computing

    Ensuring citizens’ safety and security has been identified as the number one priority for city authorities when it comes to the use of smart city technologies. Automatic understanding of the scene, and the associated provision of situational awareness in emergency situations, can contribute efficiently to these domains. In this study, a Video Analytics Edge Computing (VAEC) system is presented that provides real-time, enhanced situational awareness for person detection in video surveillance and is also able to share geolocated person-detection alerts together with other crucial accompanying information. The VAEC system adopts state-of-the-art object detection and tracking algorithms and is integrated with the proposed Distributed Edge Computing Internet of Things (DECIoT) platform. These alerts and information can be shared, through the DECIoT, with smart city platforms utilizing proper middleware. To verify the utility and functionality of the VAEC system, extended experiments were performed (i) under several lighting conditions, (ii) using several camera sensors, and (iii) in several use cases, such as installation at a fixed position on a building or mounting on a car. The results highlight the potential of the VAEC system to be exploited by decision-makers or city authorities, providing enhanced situational awareness.
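
    A geolocated person-detection alert of the kind the VAEC system forwards could, for example, be serialized as a small JSON message before being handed to the edge middleware; the field names, schema, and camera identifier in the sketch below are assumptions for illustration and not the project's actual DECIoT message format.

```python
# Hypothetical shape of a geolocated person-detection alert; field names and
# schema are illustrative assumptions, not the project's real message format.
import json
import time
import uuid

def build_person_alert(camera_id: str, lat: float, lon: float,
                       count: int, confidence: float) -> str:
    """Serialize a detection event so an edge middleware client can publish it."""
    alert = {
        "id": str(uuid.uuid4()),
        "type": "person_detection",
        "camera_id": camera_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "location": {"lat": lat, "lon": lon},
        "person_count": count,
        "confidence": round(confidence, 3),
    }
    return json.dumps(alert)

# Example: one frame with two detected persons from a fixed rooftop camera.
print(build_person_alert("cam-rooftop-01", 37.9838, 23.7275, 2, 0.91))
```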

    Embedded Vision Intelligence for the Safety of Smart Cities

    Advances in Artificial Intelligence (AI) and embedded systems have resulted in a recent increase in the use of image processing applications for smart city safety. This enables cost-effective scaling of automated video surveillance, increasing the data available and reducing the need for human intervention. At the same time, although deep learning is a very intensive task in terms of computing resources, hardware and software improvements have emerged that allow embedded systems to implement sophisticated machine learning algorithms at the edge. Additionally, new lightweight open-source middleware for resource-constrained devices, such as EdgeX Foundry, has appeared to facilitate the collection and processing of data at sensor level, with communication capabilities to exchange data with a cloud enterprise application. The objective of this work is to describe the development of two edge smart camera systems for the safety of smart cities within the S4AllCities H2020 project. Hence, the work presents hardware and software modules developed within the project, including: a custom hardware platform specifically developed for the deployment of deep learning models, based on the i.MX8 Plus from NXP, which considerably reduces processing and inference times; a custom Video Analytics Edge Computing (VAEC) system deployed on a commercial NVIDIA Jetson TX2 platform, which provides high-quality person detection results; and an edge computing framework for the management of those two edge devices, namely the Distributed Edge Computing framework (DECIoT). To verify the utility and functionality of the systems, extended experiments were performed. The results highlight their potential to provide enhanced situational awareness and demonstrate their suitability for edge machine vision applications for safety in smart cities.
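
    As a hedged sketch of how a quantized detection model might be run on such an embedded board, the snippet below performs a single forward pass with tflite_runtime; the model file, input preprocessing, and the absence of a hardware (NPU/GPU) delegate are assumptions for illustration and do not describe the project's actual deployment pipeline.

```python
# Illustrative sketch: one forward pass of a quantized uint8 TFLite detector
# on an embedded board using tflite_runtime; model path is a placeholder.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

def run_inference(model_path: str, frame: np.ndarray) -> list:
    """Run one forward pass of a quantized uint8 TFLite detector on a frame."""
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    _, height, width, _ = inp["shape"]

    # Resize the frame to the model's expected input and add a batch dimension.
    resized = cv2.resize(frame, (width, height)).astype(np.uint8)
    interpreter.set_tensor(inp["index"], resized[np.newaxis, ...])
    interpreter.invoke()
    return [interpreter.get_tensor(out["index"])
            for out in interpreter.get_output_details()]

# Example with a dummy frame; "detector_uint8.tflite" is a placeholder path.
if __name__ == "__main__":
    outputs = run_inference("detector_uint8.tflite",
                            np.zeros((480, 640, 3), dtype=np.uint8))
    print([o.shape for o in outputs])
```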

    Pulsatile Interleukin-6 Leads CRH Secretion and Is Associated With Myometrial Contractility During the Active Phase of Term Human Labor

    Objective: Our objective was to investigate IL-6 and CRH secretion during the active phase of human labor and to define their potential involvement in myometrial contractility. Study Design: Twenty-two primigravid women were studied for 90 minutes during the active phase of term labor by serial plasma sampling every 3 minutes for measurement of IL-6 and CRH concentrations. Uterine contractions, measured by cardiotocograph, were evaluated in Montevideo units. Basic, quantitative, pulsatility, and time cross-correlation statistical analyses were performed. Results: By linear regression analysis, a positive correlation was observed between the IL-6 and CRH total mean areas under the curve above 0 (r = 0.76184, P = .006). The mean number of pulses was 2.00 ± 0.70 and 3.33 ± 1.29 for IL-6 and CRH, respectively. There was a significant positive correlation between IL-6 and CRH over time, peaking at the 12-minute interval, with IL-6 leading CRH. There was also a significant positive correlation between myometrial contractility, expressed in Montevideo units, and IL-6 concentrations over time, starting at +51 minutes and ending at +57 minutes, with myometrial contractility leading IL-6. No significant correlation was found between myometrial contractility and CRH concentrations over time. Conclusion: IL-6 and CRH are both secreted in a pulsatile fashion during the active phase of human labor. The time-integrated concentrations of the two hormones are positively correlated, with IL-6 leading CRH secretion. It thus appears that proinflammatory mediators may be direct and/or indirect promoters of placental CRH release. Furthermore, the secretion of IL-6, which is a myokine, appears to be positively associated with uterine contractility. Additional studies are needed to elucidate the combined effect of inflammation, placental CRH release, and/or the receptors of the latter in parturition.
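
    The time cross-correlation analysis referred to above can be illustrated by correlating two series sampled every 3 minutes at a range of lags and locating the lag at which the correlation peaks; the Python sketch below uses synthetic pulsatile series as stand-ins for the study's hormone measurements.

```python
# Illustrative lagged cross-correlation between two evenly sampled series
# (3 minutes per sample); the data here are synthetic, not the study's.
import numpy as np

def lagged_correlations(x: np.ndarray, y: np.ndarray, max_lag: int) -> dict:
    """Pearson r between x and y at lags from -max_lag to +max_lag samples.

    A peak at a positive lag means x leads y by that many samples
    (3 minutes per sample at the study's sampling rate).
    """
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Example with synthetic pulsatile series (30 samples = 90 minutes).
rng = np.random.default_rng(0)
t = np.arange(30)
il6 = np.sin(t / 3.0) + 0.1 * rng.standard_normal(30)
crh = np.roll(il6, 4) + 0.1 * rng.standard_normal(30)  # CRH lags IL-6 by 4 samples
corr = lagged_correlations(il6, crh, max_lag=6)
print(max(corr, key=corr.get))  # expected peak near lag = +4, i.e. 12 minutes
```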